#AAAI2026 social media round up: part 2

AIHub

The 40th AAAI Conference on Artificial Intelligence took place in Singapore from 20-27 January, the first time the event has been held outside of North America. In our first social media round-up we had a peek at the first half of the conference, which hosted the tutorials, the bridge programme, and the doctoral and undergraduate consortia, as well as the start of the technical programme. Now, we pick some highlights from the second half, which saw a number of invited talks, technical sessions, posters, and the workshops.

Do VLMs actually 'see', or do they just rely on priors? One speaker showed how models fail to count the stripes on a shoe simply because they recognize the 'Adidas' logo and hallucinate the standard three stripes.






Interview with Xiang Fang: Multi-modal learning and embodied intelligence

AIHub

Xiang Fang's research focuses on multi-modal learning, specifically advancing large vision-language models, embodied intelligence, and out-of-distribution detection. Xiang has published over 40 papers in top-tier venues, including CVPR, NeurIPS, ICML, AAAI, and ACM MM. He is the recipient of multiple awards, including the NTU Research Excellence Award and Best Student Paper at MIPR 2024, and serves as a reviewer for major AI conferences.

Learning from Few Samples: A Novel Approach for High-Quality Malcode Generation

Ma, Haijian, Liu, Daizong, Cai, Xiaowen, Zhou, Pan, Xie, Yulai

arXiv.org Artificial Intelligence

Intrusion Detection Systems (IDS) play a crucial role in network security defense. However, a significant challenge in training IDS detection models is the shortage of adequately labeled malicious samples. To address this issue, this paper introduces GANGRL-LLM, a novel semi-supervised framework which integrates Generative Adversarial Networks (GANs) with Large Language Models (LLMs) to enhance malicious code generation and SQL Injection (SQLi) detection capabilities in few-sample learning scenarios. Specifically, the framework adopts a collaborative training paradigm where: (1) the GAN-based discriminator improves malicious pattern recognition through adversarial learning with generated samples and limited real samples; and (2) the LLM-based generator refines the quality of malicious code synthesis using reward signals from the discriminator. The experimental results demonstrate that even with a limited number of labeled samples, the training framework is highly effective in enhancing both malicious code generation and detection capabilities. This dual enhancement capability offers a promising solution for developing adaptive defense systems capable of countering evolving cyber threats.
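The core idea of the collaborative paradigm, a discriminator whose score is fed back as a reward to the generator, can be sketched in miniature. The toy below is not the paper's implementation: the keyword-based `discriminator_score`, the fixed `TEMPLATES` pool, and the REINFORCE-style weight update are all simplified stand-ins (a real GAN discriminator and LLM generator would replace them), chosen only to show how discriminator rewards steer the generator's sampling distribution toward malicious-looking SQLi samples.

```python
import random

# Hypothetical proxy for the GAN discriminator: scores how "SQLi-like"
# a sample is by counting known injection markers.
SQLI_MARKERS = ["' OR 1=1", "UNION SELECT", "--", "; DROP TABLE"]

def discriminator_score(sample: str) -> float:
    hits = sum(marker in sample for marker in SQLI_MARKERS)
    return hits / len(SQLI_MARKERS)

# Hypothetical proxy for the LLM generator: a weighted sampler over a
# small candidate pool (templates 0 and 2 contain SQLi patterns).
TEMPLATES = [
    "SELECT * FROM users WHERE name = 'admin' OR 1=1 --",
    "SELECT * FROM users WHERE id = 42",
    "1' UNION SELECT password FROM users --",
]

def train(steps: int = 200, lr: float = 0.1, seed: int = 0) -> list:
    """REINFORCE-style loop: the discriminator's score is the reward
    that reweights the generator's sampling distribution."""
    rng = random.Random(seed)
    weights = [1.0] * len(TEMPLATES)
    for _ in range(steps):
        total = sum(weights)
        probs = [w / total for w in weights]
        # Generator step: sample a candidate from the current policy.
        idx = rng.choices(range(len(TEMPLATES)), weights=probs)[0]
        # Discriminator step: reward the sample's malicious-pattern score.
        reward = discriminator_score(TEMPLATES[idx])
        # Policy update: reinforce templates the discriminator rates highly.
        weights[idx] += lr * reward
    total = sum(weights)
    return [w / total for w in weights]
```

After training, the probability mass shifts toward the two templates containing injection markers and away from the benign query, which is the qualitative behavior the reward loop in the paper is designed to produce (there, with an LLM refining generated code rather than reweighting a fixed pool).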